
    A Kernelisation Approach for Multiple d-Hitting Set and Its Application in Optimal Multi-Drug Therapeutic Combinations

    Therapies consisting of a combination of agents are an attractive proposition, especially in the context of diseases such as cancer, which can manifest with a variety of tumor types in a single case. However, uncovering usable drug combinations is expensive both financially and temporally. By employing computational methods to identify candidate combinations with a greater likelihood of success, we can avoid these problems, even when the amount of data is prohibitively large. Hitting Set is a combinatorial problem with useful applications across many fields; however, as it is NP-complete, it is traditionally considered hard to solve exactly. We introduce a more general version of the problem, (α,β,d)-Hitting Set, which allows more precise control over how and what the hitting set targets. Employing the framework of Parameterized Complexity, we show that despite being NP-complete, the (α,β,d)-Hitting Set problem is fixed-parameter tractable with a kernel of size O(α^d k^d) when we parameterize by the size k of the hitting set and the maximum α over the minimum numbers of required hits, taking the maximum degree d of the target sets as a constant. We demonstrate the application of this problem to multiple drug selection for cancer therapy, showing the flexibility of the problem in tailoring such drug sets. The fixed-parameter tractability result indicates that for low values of the parameters the problem can be solved quickly using exact methods. We also demonstrate that the problem is indeed practical, with computation times on the order of 5 seconds, compared with previous Hitting Set applications on the same dataset, which exhibited times on the order of 1 day, even with relatively relaxed notions of what constitutes a low value for the parameters. Furthermore, the existence of a kernelization for (α,β,d)-Hitting Set indicates that the problem is readily scalable to large datasets.
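
    To illustrate why low parameter values make exact solution fast, the following is a minimal, hypothetical sketch of a bounded-search-tree solver for the classical d-Hitting Set problem (the special case in which each target set needs only a single hit); it is not the paper's kernelisation and the function names and data layout are assumptions for exposition only. Branching on the at most d elements of an unhit set gives at most d^k leaves.

```python
from typing import FrozenSet, List, Optional, Set

def hitting_set(sets: List[FrozenSet[str]], k: int) -> Optional[Set[str]]:
    """Bounded search tree for d-Hitting Set: find at most k elements that
    intersect every set, or return None. Runs in O(d^k * |sets|) time,
    where d is the maximum set size."""
    def solve(chosen: Set[str]) -> Optional[Set[str]]:
        # Find a target set not yet hit by the partial solution.
        unhit = next((s for s in sets if not (s & chosen)), None)
        if unhit is None:
            return set(chosen)          # every set is hit
        if len(chosen) == k:
            return None                 # budget exhausted
        for element in unhit:           # branch on the <= d elements of the unhit set
            result = solve(chosen | {element})
            if result is not None:
                return result
        return None

    return solve(set())

# Toy example: target sets over candidate drugs; two drugs suffice here.
targets = [frozenset({"a", "b"}), frozenset({"b", "c"}), frozenset({"c", "d"})]
print(hitting_set(targets, 2))  # a valid hitting set of size <= 2, e.g. {'a', 'c'}
```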

    A framework for convection and boundary layer parameterization derived from conditional filtering

    A new theoretical framework is derived for parameterization of subgrid physical processes in atmospheric models; the application to parameterization of convection and boundary layer fluxes is a particular focus. The derivation is based on conditional filtering, which uses a set of quasi-Lagrangian labels to pick out different regions of the fluid, such as convective updrafts and environment, before applying a spatial filter. This results in a set of coupled prognostic equations for the different fluid components, including subfilter-scale flux terms and entrainment/detrainment terms. The framework can accommodate different types of approaches to parameterization, such as local turbulence approaches and mass-flux approaches. It provides a natural way to distinguish between local and nonlocal transport processes, and makes a clearer conceptual link to schemes based on coherent structures such as convective plumes or thermals than the straightforward application of a filter without the quasi-Lagrangian labels. The framework should facilitate the unification of different approaches to parameterization by highlighting the different approximations made, and by helping to ensure that budgets of energy, entropy, and momentum are handled consistently and without double counting. The framework also points to various ways in which traditional parameterizations might be extended, for example by including additional prognostic variables. One possibility is to allow the large-scale dynamics of all the fluid components to be handled by the dynamical core. This has the potential to improve several aspects of convection-dynamics coupling, such as dynamical memory, the location of compensating subsidence, and the propagation of convection to neighboring grid columns
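
    As a schematic illustration of the conditional-filtering idea (the notation below is chosen for exposition and only loosely follows the paper's derivation; the exact form of the exchange and subfilter terms is an assumption), one introduces indicator functions I_i(x, t) that equal 1 inside fluid component i (e.g. convective updraft or environment) and 0 elsewhere, applies a spatial filter (overbar), and defines component fractions and component-mean fields:

```latex
% Schematic conditional-filtering relations (illustrative notation only).
% I_i(x,t) = 1 inside fluid component i, 0 elsewhere; overbar = spatial filter.
\sigma_i = \overline{I_i}, \qquad
\sigma_i \rho_i = \overline{I_i \rho}, \qquad
\sigma_i \rho_i \phi_i = \overline{I_i \rho \phi}.

% Conditionally filtered budget for a scalar \phi carried by component i:
\frac{\partial\,(\sigma_i \rho_i \phi_i)}{\partial t}
  + \nabla \cdot \left(\sigma_i \rho_i \phi_i \mathbf{u}_i\right)
  = -\,\nabla \cdot \mathbf{F}^{\phi}_i
    + \sum_{j \neq i}\left(E_{ij}\,\hat{\phi}_{ij} - E_{ji}\,\hat{\phi}_{ji}\right)
    + \sigma_i \rho_i S^{\phi}_i .
```

    Here F^φ_i denotes the subfilter-scale flux within component i, E_ij the rate of mass transfer from component j into component i (with E_ji the reverse transfer), φ̂_ij the value of φ carried by that exchanged mass, and S^φ_i a source term; these correspond to the subfilter-scale flux and entrainment/detrainment terms referred to in the abstract.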

    Dendritic Cell Subtypes from Lymph Nodes and Blood Show Contrasted Gene Expression Programs upon Bluetongue Virus Infection

    Human and animal hemorrhagic viruses initially target dendritic cells (DCs). It has been proposed, but not documented, that both plasmacytoid DCs (pDCs) and conventional DCs (cDCs) may participate in the cytokine storm encountered in these infections. In order to evaluate the contribution of DCs to hemorrhagic virus pathogenesis, we performed a genome-wide expression analysis during infection by Bluetongue virus (BTV), a double-stranded RNA virus that induces hemorrhagic fever in sheep and initially infects cDCs. Both pDCs and cDCs accumulated in regional lymph nodes and spleen during BTV infection. The gene response profiles were generated at the onset of the disease and differed markedly with DC subtype and lymphoid organ location. An integrative knowledge-based analysis revealed that blood pDCs displayed a gene signature related to activation of systemic inflammation and vascular permeability. In contrast, the gene profile of pDCs and cDCs in lymph nodes was oriented towards inhibition of inflammation, whereas spleen cDCs did not show a clear functional orientation. These analyses indicate that tissue location and DC subtype affect the functional gene expression program induced by BTV, and suggest the involvement of blood pDCs in the inflammation and plasma leakage/hemorrhage observed during BTV infection in the natural host of the virus. These findings open an avenue for targeting DCs in therapeutic interventions against viral hemorrhagic diseases.

    A Transcription Factor Map as Revealed by a Genome-Wide Gene Expression Analysis of Whole-Blood mRNA Transcriptome in Multiple Sclerosis

    Background: Several lines of evidence suggest that transcription factors are involved in the pathogenesis of Multiple Sclerosis (MS), but complete mapping of the whole network has been elusive. One of the reasons is that there are several clinical subtypes of MS, and transcription factors that may be involved in one subtype may not be involved in others. We investigate the possibility that this network could be mapped using microarray technologies and contemporary bioinformatics methods on a dataset derived from whole blood in 99 untreated MS patients (36 Relapsing-Remitting MS, 43 Primary Progressive MS, and 20 Secondary Progressive MS) and 45 age-matched healthy controls. Methodology/Principal Findings: We have used two different analytical methodologies: a non-standard differential expression analysis and a differential co-expression analysis. These converged on a significant number of regulatory motifs that are statistically overrepresented in genes that are either differentially expressed or differentially co-expressed between cases and controls (e.g., V$KROX_Q6, p-value < 3.31E-6; V$CREBP1_Q2, p-value < 9.93E-6; V$YY1_02, p-value < 1.65E-5). Conclusions/Significance: Our analysis uncovered a network of transcription factors that potentially dysregulate several genes in MS or one or more of its disease subtypes. The most significant transcription factor motifs were for the Early Growth Response EGR/KROX family, ATF2, YY1 (Yin and Yang 1), E2F-1/DP-1 and E2F-4/DP-2 heterodimers, SOX5, and the CREB and ATF families. These transcription factors are involved in early T-lymphocyte specification and commitment as well as in oligodendrocyte dedifferentiation and development, pathways that both have significant biological plausibility in MS causation.
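
    Motif over-representation statistics of the kind quoted above are typically produced by an enrichment test: for each transcription factor binding motif, count how often its target genes occur among the differentially expressed genes versus all genes on the array, and compute a hypergeometric p-value. The sketch below is illustrative only; the gene sets, motif annotations, and function names are hypothetical and do not reproduce the paper's actual pipeline.

```python
from scipy.stats import hypergeom

def motif_enrichment(de_genes: set, all_genes: set, motif_targets: set) -> float:
    """Hypergeometric p-value for over-representation of a motif's target
    genes among differentially expressed (DE) genes.

    N = genes on the array, K = genes carrying the motif,
    n = DE genes, k = DE genes carrying the motif."""
    N = len(all_genes)
    K = len(motif_targets & all_genes)
    n = len(de_genes & all_genes)
    k = len(de_genes & motif_targets & all_genes)
    # P(X >= k) under the hypergeometric null of random gene selection.
    return hypergeom.sf(k - 1, N, K, n)

# Hypothetical toy example: 20,000 array genes, 500 motif targets,
# 300 DE genes of which 25 carry the motif.
all_genes = {f"g{i}" for i in range(20000)}
motif = {f"g{i}" for i in range(500)}
de = {f"g{i}" for i in range(475, 775)}   # overlaps motif on g475..g499
print(motif_enrichment(de, all_genes, motif))
```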

    Structural constraints revealed in consistent nucleosome positions in the genome of S. cerevisiae

    Background: Recent advances in the field of high-throughput genomics have made it possible to perform genome-scale studies defining the nucleosomal landscapes of eukaryote genomes. Such analyses are aimed at providing a better understanding of the process of nucleosome positioning, for which several models have been suggested. Nevertheless, questions regarding the sequence constraints of nucleosomal DNA, and how they may have been shaped through evolution, remain open. In this paper, we analyze in detail different experimental nucleosome datasets with the aim of providing a hypothesis for the emergence of nucleosome-forming sequences. Results: We compared the complete sets of nucleosome positions for the budding yeast (Saccharomyces cerevisiae) as defined in the output of two independent experiments using two different experimental techniques. We found that less than 10% of the experimentally defined nucleosome positions were consistently positioned in both datasets. This subset of well-positioned nucleosomes, when compared with the bulk, was shown to have particular properties at both the sequence and structural levels. Consistently positioned nucleosomes were also shown to occur preferentially in pairs of dinucleosomes, and to be surprisingly less conserved than their adjacent nucleosome-free linkers. Conclusion: Our findings may be combined into a hypothesis for the emergence of a weak nucleosome-positioning code. According to this hypothesis, consistent nucleosomes may be partly guided by nearby nucleosome-free regions through statistical positioning. Once established, a set of well-positioned consistent nucleosomes may impose secondary constraints that further shape the structure of the underlying DNA. We were able to capture these constraints through the application of a recently introduced structural property that is related to the symmetry of DNA curvature. Furthermore, we found that both consistently positioned nucleosomes and their adjacent nucleosome-free regions show an increased tendency towards conservation of this structural feature.

    Effects of fluoxetine on functional outcomes after acute stroke (FOCUS): a pragmatic, double-blind, randomised, controlled trial

    Background Results of small trials indicate that fluoxetine might improve functional outcomes after stroke. The FOCUS trial aimed to provide a precise estimate of these effects. Methods FOCUS was a pragmatic, multicentre, parallel group, double-blind, randomised, placebo-controlled trial done at 103 hospitals in the UK. Patients were eligible if they were aged 18 years or older, had a clinical stroke diagnosis, were enrolled and randomly assigned between 2 days and 15 days after onset, and had focal neurological deficits. Patients were randomly allocated fluoxetine 20 mg or matching placebo orally once daily for 6 months via a web-based system by use of a minimisation algorithm. The primary outcome was functional status, measured with the modified Rankin Scale (mRS), at 6 months. Patients, carers, health-care staff, and the trial team were masked to treatment allocation. Functional status was assessed at 6 months and 12 months after randomisation. Patients were analysed according to their treatment allocation. This trial is registered with the ISRCTN registry, number ISRCTN83290762. Findings Between Sept 10, 2012, and March 31, 2017, 3127 patients were recruited. 1564 patients were allocated fluoxetine and 1563 allocated placebo. mRS data at 6 months were available for 1553 (99·3%) patients in each treatment group. The distribution across mRS categories at 6 months was similar in the fluoxetine and placebo groups (common odds ratio adjusted for minimisation variables 0·951 [95% CI 0·839–1·079]; p=0·439). Patients allocated fluoxetine were less likely than those allocated placebo to develop new depression by 6 months (210 [13·43%] patients vs 269 [17·21%]; difference 3·78% [95% CI 1·26–6·30]; p=0·0033), but they had more bone fractures (45 [2·88%] vs 23 [1·47%]; difference 1·41% [95% CI 0·38–2·43]; p=0·0070). There were no significant differences in any other event at 6 or 12 months. Interpretation Fluoxetine 20 mg given daily for 6 months after acute stroke does not seem to improve functional outcomes. Although the treatment reduced the occurrence of depression, it increased the frequency of bone fractures. These results do not support the routine use of fluoxetine either for the prevention of post-stroke depression or to promote recovery of function. Funding UK Stroke Association and NIHR Health Technology Assessment Programme

    Policy Transfer with a Relational Learning Classifier System

    Policy transfer occurs when a system transfers a policy learnt for one task to another task with little or no retraining; it allows a system to perform robustly and learn efficiently, especially when the new task is more complex than the original task. In this paper we report on work in progress into policy transfer using a relational learning classifier system. The system, Fox-cs, uses a high-level relational language (a subset of first-order logic) in combination with a P-learning technique adapted for Xcs and its derivatives. Fox-cs achieved successful policy transfer in two blocks-world tasks, stacking and onab, by learning a policy that was independent of the number of blocks, thus avoiding the prohibitive training times that would normally arise due to the exponential explosion in the number of states as the number of blocks increases.

    A population-based approach to finding the matchset of a learning classifier system efficiently

    Profiling of the learning classifier system XCS [11] has revealed that its execution time tends to be dominated by rule matching [8]; it is therefore important for rule matching to be efficient. To date, the fastest speedups for matching have been achieved by exploiting parallelism [8], but efficient sequential approaches, such as bitset and “specificity” matching [2], can be utilised if there is no platform support for the vector instruction sets that [8] employs. Previous sequential approaches have focussed on improving the efficiency of matching individual rules; in this paper, we introduce a population-based approach that partially matches many rules simultaneously. This is achieved by maintaining the rule base in a rooted 3-ary tree over which a backtracking depth-first search is run to find the matchset. We found that the method generally outperformed standard and specificity matching on raw matching and on several benchmarking tasks. While the bitset approach attained the best speedups on the benchmarking tasks, we give an analysis showing that it can be the least efficient of the approaches on long rule conditions. A limitation of the new method is that it is inefficient when the proportion of “don’t care” symbols in the rule conditions is very large, which could perhaps be remedied by combining the method with the specificity technique.
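
    A minimal sketch of the tree-based matching idea described above (illustrative only; the data structures and names are hypothetical rather than taken from the paper): ternary rule conditions over {0, 1, #} are stored in a 3-ary trie, and a backtracking depth-first search driven by the input state visits only the branches labelled '#' or with the corresponding input bit, so shared prefixes of many rules are tested once.

```python
class TrieNode:
    """Node of a 3-ary trie over the ternary alphabet {'0', '1', '#'}."""
    def __init__(self):
        self.children = {}   # symbol -> TrieNode
        self.rule_ids = []   # rules whose condition ends at this node

def insert(root: TrieNode, condition: str, rule_id: int) -> None:
    node = root
    for symbol in condition:
        node = node.children.setdefault(symbol, TrieNode())
    node.rule_ids.append(rule_id)

def matchset(root: TrieNode, state: str) -> list:
    """Backtracking depth-first search: at depth i, follow only the '#'
    branch and the branch labelled with state[i]."""
    matches = []
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        if depth == len(state):
            matches.extend(node.rule_ids)
            continue
        for symbol in ('#', state[depth]):
            child = node.children.get(symbol)
            if child is not None:
                stack.append((child, depth + 1))
    return matches

# Toy population of four rule conditions over 4-bit states.
root = TrieNode()
for rid, cond in enumerate(["01#1", "0##1", "1#01", "####"]):
    insert(root, cond, rid)
print(matchset(root, "0101"))   # rules 0, 1 and 3 match
```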